1. From Addition to Multiplication
In theoretical frameworks, linear transformations and translations (affine maps of the form $T(v) = Av + v_0$) are often handled separately. However, high-performance libraries such as BLAS (Basic Linear Algebra Subprograms) are optimized specifically for matrix-vector and matrix-matrix products. To leverage these kernels, we want every operation, including translation, expressed as a single product:
$$T(v) = Av$$
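To see why a plain $3 \times 3$ matrix cannot do this on its own: any linear map sends the origin to itself, so a shift requires the extra coordinate introduced next. A minimal NumPy sketch (the matrix and shift values are illustrative, not from the text):

```python
import numpy as np

# A linear map always fixes the origin: A @ 0 == 0, so no 3x3 matrix
# alone can represent a translation. (Values below are illustrative.)
A = np.array([[2.0, 0.0, 0.0],
              [0.0, 3.0, 0.0],
              [0.0, 0.0, 1.0]])
v0 = np.array([1.0, -2.0, 5.0])  # desired translation

origin = np.zeros(3)
linear_image = A @ origin            # stays at the origin
affine_image = A @ origin + v0      # the affine map Av + v0 moves it
```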
2. Homogeneous Coordinates
To implement a shift in $\mathbf{R}^n$ using a matrix, we expand to $\mathbf{R}^{n+1}$. A vector $[x, y, z]^T$ becomes $[x, y, z, 1]^T$. This "extra 1" allows a translation to be encoded in the last column of an $(n+1) \times (n+1)$ matrix.
A translation by $v_0 = [t_x, t_y, t_z]^T$ is represented by:
$$A = \begin{bmatrix} 1 & 0 & 0 & t_x \\ 0 & 1 & 0 & t_y \\ 0 & 0 & 1 & t_z \\ 0 & 0 & 0 & 1 \end{bmatrix}$$
The numbers $0, 0, 0, 1$ in the last row serve a critical role. When $A$ multiplies a vector with a final component of $1$, the resulting final component is:
$(0 \cdot x) + (0 \cdot y) + (0 \cdot z) + (1 \cdot 1) = 1$
Because the output's final component is again $1$, the result is a valid homogeneous point. Translations and linear maps can therefore be chained by ordinary matrix multiplication without losing track of the coordinate system.
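The translation matrix above and the preservation of the final component can be checked directly in NumPy (the helper name `translation_matrix` is ours, not a library function):

```python
import numpy as np

def translation_matrix(t):
    """Build the 4x4 homogeneous matrix that shifts points in R^3 by t."""
    T = np.eye(4)
    T[:3, 3] = t      # translation goes in the last column
    return T

p = np.array([2.0, 0.0, -1.0, 1.0])      # point [x, y, z, 1]^T
A = translation_matrix([1.0, 2.0, 3.0])

q = A @ p                                 # shifted point; last component is still 1

# Sequential operations compose by matrix multiplication:
B = translation_matrix([-1.0, 0.0, 1.0])
r = (B @ A) @ p                           # shift by A's offset, then B's
```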
3. Implementation Standards: BLAS
Numerical efficiency relies on standardized subroutines. BLAS provides three levels of operations:
- Level 1: Vector-vector operations, $O(n)$ work (e.g., dot products; `axpy` updates $y \leftarrow \alpha x + y$).
- Level 2: Matrix-vector operations, $O(n^2)$ work (e.g., `gemv`: $y \leftarrow \alpha Ax + \beta y$).
- Level 3: Matrix-matrix operations, $O(n^3)$ work on $O(n^2)$ data (e.g., `gemm`: $C \leftarrow \alpha AB + \beta C$). Their high ratio of arithmetic to data movement makes them the most computationally dense and hardware-efficient.
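The three levels can be sketched with NumPy, which typically dispatches these operations to an underlying BLAS implementation (e.g., OpenBLAS or MKL); the arrays here are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
x, y = rng.standard_normal(n), rng.standard_normal(n)
A, B = rng.standard_normal((n, n)), rng.standard_normal((n, n))

# Level 1: vector-vector, O(n) work (BLAS ddot)
s = x @ y

# Level 2: matrix-vector, O(n^2) work (BLAS dgemv)
v = A @ x

# Level 3: matrix-matrix, O(n^3) work over O(n^2) data (BLAS dgemm);
# the data reuse is what lets it run closest to the hardware's peak rate
C = A @ B
```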